Universality Laws for High-Dimensional Learning With Random Features

Abstract

We prove a universality theorem for learning with random features. Our result shows that, in terms of training and generalization errors, a random feature model with a nonlinear activation function is asymptotically equivalent to a surrogate linear Gaussian model with a matching covariance matrix. This settles the so-called Gaussian equivalence conjecture, on which several recent papers base their results. Our method of proof builds on the classical Lindeberg approach. Major ingredients of the proof include a leave-one-out analysis of the optimization problem associated with the training process and a central limit theorem, obtained via Stein's method, for weakly correlated random variables.
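As a hedged illustration of the equivalence claim (not the paper's construction), the NumPy sketch below compares ridge training errors on ReLU random features and on a matching linear Gaussian surrogate built from the ReLU's Gaussian (Hermite) coefficients. The sizes, the ReLU activation, and the ridge penalty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 600, 400, 300      # samples, input dimension, number of random features
lam = 0.1                    # ridge penalty (illustrative)

X = rng.standard_normal((n, d)) / np.sqrt(d)   # rows with approximately unit norm
W = rng.standard_normal((d, p))                # random feature directions
y = np.sign(X @ rng.standard_normal(d))        # synthetic +/-1 labels

relu = lambda z: np.maximum(z, 0.0)

# Gaussian (Hermite) coefficients of ReLU for g ~ N(0, 1):
mu0 = 1.0 / np.sqrt(2.0 * np.pi)          # E[relu(g)]
mu1 = 0.5                                 # E[g * relu(g)]
mu_star = np.sqrt(0.5 - mu0**2 - mu1**2)  # residual std, using E[relu(g)^2] = 1/2

G = X @ W                                 # entries approximately N(0, 1)
Phi = relu(G)                             # nonlinear random features
Z = rng.standard_normal((n, p))           # fresh Gaussian noise
Phi_surr = mu0 + mu1 * G + mu_star * Z    # linear Gaussian surrogate

def ridge_train_err(F, y, lam):
    """Training MSE of ridge regression with features F."""
    beta = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)
    return float(np.mean((F @ beta - y) ** 2))

e_nl = ridge_train_err(Phi, y, lam)
e_surr = ridge_train_err(Phi_surr, y, lam)
print(e_nl, e_surr)   # the two training errors should be close at these sizes
```

The match tightens as n, d, and p grow proportionally, which is the asymptotic regime the theorem addresses.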

Similar Articles

Universality in three dimensional random-

We investigate the critical behavior of three-dimensional random-field Ising systems with both Gaussian and bimodal distributions of the random fields, and additionally the three-dimensional diluted Ising antiferromagnet in an external field. These models are expected to be in the same universality class. We use exact ground-state calculations with an integer optimization algorithm and a finite-size sc...

Laws of Large Numbers for Random Linear

The computational solution of large-scale linear programming problems involves various difficulties. One of them is ensuring numerical stability. There is another difficulty of a different nature: the original data contain errors as well. In this paper, we show that the effect of random errors in the original data on the optimal value has a diminishing tendency as the...

Learning Kernels with Random Features

Randomized features provide a computationally efficient way to approximate kernel machines in machine learning tasks. However, such methods require a user-defined kernel as input. We extend the randomized-feature approach to the task of learning a kernel (via its associated random features). Specifically, we present an efficient optimization problem that learns a kernel in a supervised manner. ...
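The randomized-feature approximation this abstract builds on can be sketched with the classic random Fourier feature construction of Rahimi and Recht (an assumption here; the paper's supervised kernel-learning optimization is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
d, D, gamma = 5, 5000, 0.5   # input dim, feature count, RBF bandwidth (illustrative)

# Random Fourier features approximating k(x, y) = exp(-gamma * ||x - y||^2):
# frequencies are drawn from the kernel's spectral density N(0, 2*gamma*I).
W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def rff(X):
    """Map rows of X to D cosine features whose inner products approximate k."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = float(np.exp(-gamma * np.sum((x - y) ** 2)))
approx = float(rff(x[None]) @ rff(y[None]).T)
print(exact, approx)   # inner product of features tracks the exact kernel value
```

The approximation error shrinks at rate O(1/sqrt(D)), so increasing the feature count D trades computation for kernel fidelity.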

Strong Laws for Weighted Sums of Negative Dependent Random Variables

In this paper, we discuss strong laws for weighted sums of pairwise negatively dependent random variables. The results for the i.i.d. case due to Soo Hak Sung [9] are generalized and extended.
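A quick numerical sanity check of such a strong law, in the simpler i.i.d. setting rather than the pairwise negatively dependent one the paper treats, and with an illustrative weight sequence:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
X = rng.exponential(scale=2.0, size=n)        # i.i.d. variables with mean 2
a = 1.0 / np.sqrt(np.arange(1.0, n + 1.0))    # weights a_i = i**(-1/2)

# Normalized weighted sum: sum(a_i * X_i) / sum(a_i) should converge a.s. to E[X] = 2.
weighted_mean = float(np.sum(a * X) / np.sum(a))
print(weighted_mean)
```

The strong-law results in papers like this one identify which weight and dependence conditions keep such normalized sums converging almost surely.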

High-Dimensional Unsupervised Active Learning Method

In this work, a hierarchical-ensemble projected clustering algorithm for high-dimensional data is proposed. The basic concept of the algorithm is based on the active learning method (ALM), a fuzzy learning scheme inspired by behavioral features of human brain functionality. The high-dimensional unsupervised active learning method (HUALM) is a clustering algorithm which blurs the da...

Journal

Journal title: IEEE Transactions on Information Theory

Year: 2023

ISSN: 0018-9448, 1557-9654

DOI: https://doi.org/10.1109/tit.2022.3217698